
Reviews: Confusions over Time: An Interpretable Bayesian Model to Characterize Trends in Decision Making

Neural Information Processing Systems

The authors motivate the proposed model with the setting in which items have "true" but unobserved labels/ratings, and the labels/ratings given by evaluators are potentially incorrect. This differs from the common setting in recommendation systems or collaborative filtering, where evaluators provide subjective ratings and no "true" rating is assumed to exist (e.g., Netflix users giving 1-5 star ratings to movies). This is a common but underexplored setting that merits further study within machine learning. The authors are also right to highlight interpretability as a desired property of any machine learning solution that may yield post-hoc insights into common human biases and thus suggest corrective measures. This paper does a good job of motivating the proposed model and situating it within the crowdsourcing and human annotation literature.
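The setting the review describes can be sketched generatively: each item has a latent "true" label, and each evaluator reports a noisy label drawn through a per-evaluator confusion matrix. A minimal simulation of that setting, assuming illustrative sizes and a 0.9 per-label accuracy (none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_evaluators, n_labels = 100, 5, 3

# Latent "true" labels, never observed directly.
true_labels = rng.integers(0, n_labels, size=n_items)

# Each evaluator has a row-stochastic confusion matrix:
# confusion[e, t, o] = P(evaluator e reports o | true label is t).
confusion = np.full((n_evaluators, n_labels, n_labels), 0.1 / (n_labels - 1))
for e in range(n_evaluators):
    for t in range(n_labels):
        confusion[e, t, t] = 0.9  # mostly accurate evaluators (assumption)

# Observed labels are noisy draws through each evaluator's confusion matrix.
observed = np.empty((n_evaluators, n_items), dtype=int)
for e in range(n_evaluators):
    for i in range(n_items):
        observed[e, i] = rng.choice(n_labels, p=confusion[e, true_labels[i]])
```

With accurate evaluators, most observed labels agree with the latent truth, but the disagreements are exactly the "confusions" such a model would try to recover.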


Confusions over Time: An Interpretable Bayesian Model to Characterize Trends in Decision Making

Lakkaraju, Himabindu, Leskovec, Jure

Neural Information Processing Systems

We propose Confusions over Time (CoT), a novel generative framework which facilitates a multi-granular analysis of the decision making process. CoT not only models the confusions, or error properties, of individual decision makers and their evolution over time, but also yields diagnostic insights into the collective decision making process in an interpretable manner. Interpretable insights are obtained by grouping similar decision makers (and the items being judged) into clusters, representing each cluster with an appropriate prototype, and identifying the most important features characterizing the cluster via a subspace feature indicator vector. Experiments with real-world data on bail decisions, asthma treatments, and insurance policy approvals demonstrate that CoT can accurately model and explain the confusions of decision makers and their evolution over time.
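As a rough illustration of the kind of interpretable summary the abstract describes (not the paper's Bayesian inference), one can estimate an evaluator's empirical confusion matrix from paired true/observed labels and read off the dominant confusion; clustering evaluators by these matrices would then group decision makers with similar error behaviour. The function names and the toy data below are hypothetical:

```python
import numpy as np

def empirical_confusion(true_labels, observed, n_labels):
    """Row-normalized confusion matrix from paired (true, observed) labels."""
    counts = np.zeros((n_labels, n_labels))
    for t, o in zip(true_labels, observed):
        counts[t, o] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)

def dominant_confusion(conf):
    """The most frequent error: the (true, observed) pair with the largest
    off-diagonal probability -- a crude, interpretable summary of one
    evaluator's error behaviour."""
    off = conf.copy()
    np.fill_diagonal(off, 0.0)
    t, o = np.unravel_index(off.argmax(), off.shape)
    return int(t), int(o), float(off[t, o])

true = np.array([0, 0, 1, 1, 2, 2, 0, 1])  # toy ground truth
obs = np.array([0, 1, 1, 1, 2, 0, 0, 2])   # toy evaluator decisions
conf = empirical_confusion(true, obs, 3)
print(dominant_confusion(conf))  # → (2, 0, 0.5): label 2 mistaken for 0
```

Flattening each evaluator's matrix and clustering the resulting vectors (e.g., with k-means) would yield the grouping step; the paper's subspace feature indicator vectors additionally pick out which item features characterize each cluster.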